
    Hospital implementation of health information technology and quality of care: are they related?

    Recently, there has been considerable effort to promote the use of health information technology (HIT) to improve health care quality. However, relatively little is known about the extent to which HIT implementation is associated with hospital patient care quality. We undertook this study to determine the association of various HITs with: hospital quality improvement (QI) practices and strategies; adherence to process-of-care measures; risk-adjusted inpatient mortality; patient satisfaction; and assessment of patient care quality by hospital quality managers and front-line clinicians.

    This work was supported by a grant from the Commonwealth Fund. We are indebted to Anthony Shih and Anne-Marie Audet of the Fund for their advice, support, and constructive suggestions throughout the design and conduct of the study. We thank our colleagues Raymond Kang, Peter Kralovec, Sally Holmes, Frances Margolin, and Deborah Bohr for their valuable contributions to the development of the QAS, the CPS, and the database on which the analytic findings reported here were based. We also thank 3M (TM) Health Information Systems for use of its All Patient Refined Diagnosis Related Groups (APR-DRGs) software. We especially wish to thank Jennifer Drake for her contributions not only to survey development, but also to earlier analysis of survey findings relevant to this paper. (Commonwealth Fund) Published version

    Contextual Object Detection with a Few Relevant Neighbors

    A natural way to improve the detection of objects is to consider the contextual constraints imposed by the detection of additional objects in a given scene. In this work, we exploit the spatial relations between objects to improve detection capacity, and we analyze various properties of the contextual object detection problem. To calculate context-based probabilities of objects precisely, we developed a model that examines the interactions between objects in an exact probabilistic setting, in contrast to previous methods that typically utilize approximations based on pairwise interactions. Such a scheme is facilitated by the realistic assumption that the existence of an object in any given location is influenced by only a few informative locations in space. Based on this assumption, we suggest a method for identifying these relevant locations and integrating them into a mostly exact calculation of probability based on their raw detector responses. This scheme is shown to improve detection results and provides unique insights into the process of contextual inference for object detection. We show that it is generally difficult to learn that a particular object reduces the probability of another, and that in cases where the context and detector strongly disagree, this learning becomes virtually impossible for the purposes of improving the results of an object detector. Finally, we demonstrate improved detection results through use of our approach as applied to the PASCAL VOC and COCO datasets.
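
    To illustrate the flavor of the "few relevant neighbors" assumption, here is a minimal sketch (our own simplification, with a hypothetical pairwise log-odds table standing in for the learned spatial relations; the paper performs exact probabilistic inference rather than this heuristic message sum): each detection keeps only its k most informative neighbors and folds their evidence into its raw detector score.

        import numpy as np

        def rescore_with_context(scores, pair_logodds, k=3):
            """Adjust raw detector scores using a few relevant neighbors.

            scores: (n,) raw detector probabilities for n candidate detections.
            pair_logodds: (n, n) table; pair_logodds[i, j] is the log-odds
                evidence that detection j being present contributes to
                detection i (hypothetical placeholder for learned relations;
                0 means uninformative).
            k: number of relevant neighbors kept per detection.
            """
            scores = np.clip(scores, 1e-6, 1 - 1e-6)
            logit = np.log(scores / (1 - scores))  # prior evidence from the detector
            out = np.empty_like(scores)
            for i in range(len(scores)):
                contrib = pair_logodds[i].copy()
                contrib[i] = 0.0
                # keep only the k most informative neighbors (largest |log-odds|)
                idx = np.argsort(-np.abs(contrib))[:k]
                # weight each neighbor's message by its own detector confidence
                total = logit[i] + np.sum(contrib[idx] * scores[idx])
                out[i] = 1.0 / (1.0 + np.exp(-total))  # back to a probability
            return out

        # toy example: detection 2's presence supports detection 0,
        # while detection 1's presence argues against it
        raw = np.array([0.55, 0.9, 0.8])
        rel = np.zeros((3, 3))
        rel[0, 2], rel[0, 1] = 1.5, -1.0
        print(rescore_with_context(raw, rel, k=2))

    Note that a negative entry in the table is exactly the "one object reduces the probability of another" case the abstract identifies as hard to learn.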

    Generalization Error in Deep Learning

    Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, the source of their generalization ability remains generally unclear. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.
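
    For reference, one classical result of the kind such an overview covers is the uniform-convergence bound for a finite hypothesis class H (a textbook bound, not a result specific to this article): by Hoeffding's inequality and a union bound over H, with probability at least 1 − δ over the draw of m i.i.d. training samples,

        \forall h \in \mathcal{H}: \quad R(h) \;\le\; \widehat{R}_m(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2m}},

    where R(h) is the expected risk and R̂_m(h) the empirical risk on the sample. The gap vanishes as m grows but scales with the log-size of the hypothesis class, which is precisely the style of bound that becomes vacuous for heavily overparameterized networks and motivates the more recent results such surveys discuss.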

    Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier features

    One-class support vector machine (OC-SVM) has long been one of the most effective anomaly detection methods, extensively adopted in both research and industrial applications. The biggest issue for OC-SVM, however, is its capability to operate with large and high-dimensional datasets, due to optimization complexity. Those problems might be mitigated via dimensionality reduction techniques such as manifold learning or autoencoders. However, previous work often treats representation learning and anomaly prediction separately. In this paper, we propose the autoencoder-based one-class support vector machine (AE-1SVM), which brings OC-SVM, with the aid of random Fourier features to approximate the radial basis kernel, into the deep learning context by combining it with a representation learning architecture and jointly exploiting stochastic gradient descent to obtain end-to-end training. Interestingly, this also opens up the possible use of gradient-based attribution methods to explain the decision making for anomaly detection, which has long been challenging as a result of the implicit mappings between the input space and the kernel space. To the best of our knowledge, this is the first work to study the interpretability of deep learning in anomaly detection. We evaluate our method on a wide range of unsupervised anomaly detection tasks, in which our end-to-end training architecture achieves performance significantly better than previous work using separate training.

    Comment: Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD) 2018
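
    The random-Fourier-feature half of this design can be sketched in a few lines with scikit-learn, whose RBFSampler implements the RFF approximation of the RBF kernel and whose SGDOneClassSVM fits a linear one-class SVM by stochastic gradient descent. This is only the kernel-approximation core on our own toy data, not the paper's AE-1SVM: the autoencoder front end and the joint end-to-end training are omitted.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.kernel_approximation import RBFSampler
        from sklearn.linear_model import SGDOneClassSVM

        rng = np.random.RandomState(0)
        X_train = rng.normal(size=(5000, 20))               # "normal" data only
        X_test = np.vstack([rng.normal(size=(50, 20)),      # inliers
                            rng.normal(loc=6.0, size=(50, 20))])  # anomalies

        # z(x) is a random Fourier map with z(x).z(y) ~ exp(-gamma * ||x - y||^2),
        # so the linear OC-SVM on z(x) approximates a kernelized OC-SVM
        model = make_pipeline(
            RBFSampler(gamma=0.1, n_components=500, random_state=0),
            SGDOneClassSVM(nu=0.05, random_state=0),  # linear OC-SVM fit by SGD
        )
        model.fit(X_train)
        pred = model.predict(X_test)  # +1 = inlier, -1 = anomaly
        print((pred[:50] == 1).mean(), (pred[50:] == -1).mean())

    Because the whole pipeline is differentiable apart from the feature sampling, attaching an encoder in front and training end-to-end, as the paper does, is a natural next step.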

    Science teachers' pedagogical content knowledge development during enactment of socioscientific curriculum materials

    The purpose of this study is to provide insight into short‐term professionalization of teachers regarding the teaching of socioscientific issues (SSI). The study aimed to capture the development of science teachers' pedagogical content knowledge (PCK) for SSI teaching through the enactment of specially designed SSI curriculum materials. The study also explores indicators of stronger and weaker development of PCK for SSI teaching. Thirty teachers from four countries (Cyprus, Israel, Norway, and Spain) used one module (a 30-60 minute lesson) of SSI materials. The data were collected through: (a) a lesson preparation form (PCK‐before), (b) a lesson reflection form (PCK‐after), and (c) a lesson observation table (PCK‐in‐action). The data analysis was based on the PCK model of Magnusson, Krajcik, and Borko (1999). Strong development of PCK for SSI teaching includes 'Strong interconnections between the PCK components,' 'Understanding of students' difficulties in SSI learning,' 'Suggesting appropriate instructional strategies,' and 'Focusing equally on science content and SSI skills.' Our findings point to the importance of these aspects of PCK development for SSI teaching. We argue that when professional development programs and curriculum materials focus on developing these aspects, they will contribute to strong PCK development for SSI teaching. The findings regarding the development of the components of PCK for SSI provide compelling evidence that science teachers can develop aspects of their PCK for SSI with the use of a single module. Most of the teachers developed their knowledge about students' understanding of science and instructional strategies. The recognition of student difficulties made the teachers consider specific teaching strategies in line with the learning objectives. There is an evident link between the development of PCK in instructional strategies and students' understanding of science for SSI teaching.

    Private Incremental Regression

    Data is continuously generated by modern data sources, and a recent challenge in machine learning has been to develop techniques that perform well in an incremental (streaming) setting. In this paper, we investigate the problem of private machine learning, where, as is common in practice, the data is not given at once but rather arrives incrementally over time. We introduce the problems of private incremental ERM and private incremental regression, where the general goal is to always maintain a good empirical risk minimizer for the observed history under differential privacy. Our first contribution is a generic transformation of private batch ERM mechanisms into private incremental ERM mechanisms, based on the simple idea of invoking the private batch ERM procedure at regular time intervals. We take this construction as a baseline for comparison. We then provide two mechanisms for the private incremental regression problem. Our first mechanism is based on privately constructing a noisy incremental gradient function, which is then used in a modified projected gradient procedure at every timestep. This mechanism has an excess empirical risk of approximately √d, where d is the dimensionality of the data. While the results of [Bassily et al. 2014] show that this bound is tight in the worst case, we show that certain geometric properties of the input and constraint set can be used to derive significantly better results for certain interesting regression problems.

    Comment: To appear in PODS 2017
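
    To make the baseline transformation concrete, here is a minimal sketch (our illustration with hypothetical class and parameter names, not the paper's construction verbatim): buffer the stream, invoke a private batch ERM mechanism (here, ridge regression with Gaussian output perturbation) at regular intervals, and serve the last release in between. The noise scale sigma is a placeholder; a real instantiation must calibrate it to the batch solver's sensitivity and split the privacy budget across all releases.

        import numpy as np

        class PrivateIncrementalERM:
            """Batch-to-incremental baseline (sketch): rerun a private
            batch mechanism every `interval` arrivals; between releases
            the previously released model is served unchanged."""

            def __init__(self, dim, interval=100, lam=1.0, sigma=0.1, seed=0):
                self.X, self.y = [], []
                self.interval, self.lam, self.sigma = interval, lam, sigma
                self.theta = np.zeros(dim)
                self.rng = np.random.default_rng(seed)

            def _private_batch_erm(self):
                # ridge regression on all data seen so far ...
                X, Y = np.asarray(self.X), np.asarray(self.y)
                d = X.shape[1]
                theta = np.linalg.solve(X.T @ X + self.lam * np.eye(d), X.T @ Y)
                # ... released with Gaussian output perturbation; sigma is a
                # placeholder, not a calibrated (epsilon, delta) guarantee
                return theta + self.rng.normal(scale=self.sigma, size=d)

            def observe(self, x, y):
                self.X.append(x)
                self.y.append(y)
                if len(self.y) % self.interval == 0:  # regular re-release
                    self.theta = self._private_batch_erm()
                return self.theta  # current private model, possibly stale

    Each release consumes privacy budget, so the interval trades staleness against noise; the paper's dedicated regression mechanisms improve on this baseline by privatizing the incremental gradients themselves.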